In this lab you will learn
library(tidyverse)
library(gridExtra) ## for plotting function grid.arrange()
library(deSolve) ## for predator-prey-resource ODE model
library(magrittr) ## for pipe symbol %<>%
library(knitr) ## for table-viewing function kable()
library(readxl) ## for data reading function read_excel()
library(rio) ## for data-reading function import()
library(growthrates) ## for dataset bactgrowth
A defining property of living organisms is the ability to reproduce, or self-replicate. One individual becomes two, two become four, four become eight, etc. However, there is something uncanny about this process. In the words of Vandermeer & Goldberg, “It is often surprising how quickly a self-reproducing event becomes a big event.” To illustrate the issue, they offer a little thought experiment:
Q1. Suppose you have a pond with some lily pads in it, and suppose each lily pad replicates itself once per week. If it takes a year for half the pond to become covered with lily pads, how long will it take for the entire pond to be covered?
Water lilies in a pond
The lily pad Gedankenexperiment is an example of exponential growth.
Here is another one: According to a famous legend, the emperor of India offered to the creator of chess a reward of their choice for such a brilliant invention. The inventor requested one grain of wheat for the first square on the chessboard, two grains for the second square, four grains for the third square, eight for the fourth square, and so on. The emperor, amazed at being asked for such a meager prize, promptly agreed. But that was a mistake the emperor came to regret.
Q2: How many grains of wheat did the emperor owe the inventor? Hint: recalling that the elements in the series 1, 2, 4, 8, … are the consecutive powers of 2, you can work out the sum with a single line of R code. Provide your code.
The number of grains of wheat grows exponentially from one square of the chessboard to the next, with a growth rate of 2. The emperor failed to appreciate how fast exponential growth can be!
Note: This story, first recorded in 1256 by Ibn Khallikan in his biographical encyclopedia of Muslim scholars, documents the fascination of ancient scholars with the explosive nature of exponential growth. It is apocryphal, of course. Chess was not invented by a single person at one point in time, but evolved over centuries from its 6th-century origins in the Indian game of chaturanga.
As you can see, exponential growth starts seemingly slow but then gets very big very fast. This may make it seem as if exponential growth is merely a mathematical curiosity. However, there are circumstances where it is indeed observed in nature.
Suppose you are an agronomist in charge of managing a corn- and bean- producing milpa (a small Mesoamerican crop-growing field). You notice the corn plants have been infested with aphids, a small herbivorous insect which feeds on plant sap. The infestation began on March 18, and since then you have ascertained that the population of aphids has been growing as follows:
Table: Number of aphids per cornstalk in a milpa in the Highlands of Guatemala (Morales, 1998) (from Vandermeer & Goldberg 2013)
| Date | Week | Aphids per plant |
|---|---|---|
| March 25 | 1 | 0.02 |
| April 1 | 2 | 0.50 |
| April 8 | 3 | 1.50 |
| April 15 | 4 | 5.00 |
| April 22 | 5 | 14.50 |
Aphids on plant host, by Andreas Eichler
When in sufficient numbers, aphids can affect crop yield. You estimate that if the population exceeds 40 aphids per plant, the farmer must take action to control it. The current population is well within acceptable levels; however, the rapid growth raises concern.
The plot above shows accelerated growth: each week, the increase in the aphid population is larger than the increase in the previous week. In fact, this looks like exponential growth, where the population grows by a constant multiplicative factor every week. This can be further confirmed by plotting the logarithm of the number of aphids per plant against time:
The plot shows reasonably linear behavior between \(\log(N)\) and time. As such, we can fit a linear regression model to it and replot:
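Here is a minimal sketch of that fit, using the aphid counts from the table above (base R for brevity, while the lab's figures use ggplot):

```r
# Aphid counts from the table above; week is measured since March 18
week   = 1:5
aphids = c(0.02, 0.50, 1.50, 5.00, 14.50)

# Regress log(N) on time: under exponential growth, log(N) is linear in t
fit = lm(log(aphids) ~ week)
round(coef(fit), 2)   # intercept ~ -4.63, slope ~ 1.55
```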
The slope of the linear regression indicates that every week the logarithm \(\log(N)\) of the number of aphids per plant increases by 1.55. This means that \(N\) itself increases by a constant multiplicative factor \(e^{1.55} \simeq 4.7\) per week. In other words, each aphid turns into 4.7 aphids by the following week (whether by producing 3.7 offspring and surviving, or by dying after leaving 4.7 descendants).
The intercept of the linear regression states that in Week 0, on March 18, there were initially \(e^{-4.63} = 0.0098\) aphids per plant, or approximately 1 aphid for every 100 plants.
We conclude that in five weeks the infestation grew exponentially from 1 aphid per 100 plants to almost 15 aphids per plant, increasing by a factor of \(e^{1.55} \simeq 4.7\) per week. Mathematically, we write \(N(t) = 0.0098\,e^{1.55t}\), where \(t\) is the time in weeks since March 18.
How long will it take for the infestation to reach the critical point of 40 aphids per plant? We can find out by solving the equation \(40 = 0.0098\,e^{1.55t}\) for \(t\). After some algebra (or plugging this expression into WolframAlpha), we find that \(t = \frac{1}{1.55}\log\left(\frac{40}{0.0098}\right) \simeq 5.36\). This means we will reach 40 aphids per plant 5.36 weeks, or 37.5 days, after March 18, which falls on April 24.
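The arithmetic can be checked with a line of R:

```r
# Weeks until N(t) = 0.0098 * exp(1.55 * t) reaches 40 aphids per plant
t_critical = log(40 / 0.0098) / 1.55
c(weeks = t_critical, days = 7 * t_critical)   # ~5.36 weeks, ~37.5 days
```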
Notice that it took 5 weeks for the aphid infestation to reach a little over a third of the critical level, and then only two or three more days to get there!
Q3. On what day will the aphid population reach 80 insects per plant?
Q4. An aphid weighs about 0.2 milligrams. Assuming a cornstalk weighs about 5kg, how does the total mass of aphids on a cornstalk compare to the weight of the plant on Week 14?
SARS-CoV-2. Credit: University of Bristol, UK
Another situation where one might observe exponential growth is the early days of an epidemic, when a new pathogen sweeps through a host population with no previous immunity. Recently, this was the case with Covid-19.
The code below reads publicly available Covid data from a URL (my GitHub page), turns it into a tibble (recall Lab 0), then converts the dates from character format into date format.
data =
'https://github.com/rafaeldandrea/BIO-356/blob/master/covid_data.rds?raw=true' |>
url() |>
readRDS() |>
as_tibble() |>
mutate(date = as.Date(date))
This is a rich data set with information on new cases, total cases, hospitalizations, etc., broken down by country and covering the entire duration of the epidemic, from its beginnings in late 2019 to the present day. For our purposes, we will focus on a single country and a specific period of time, and we will concern ourselves with the (smoothed) new-cases count.
The code below specifies our country and period of interest, then subsets the data:
country = 'United Kingdom'
initial_date = '2020-03-01'
final_date = '2020-04-01'
filtered_data =
data |>
filter(
location == country,
date >= initial_date,
date < final_date
)
As you may recall, the period in question, March of 2020, was the first wave of Covid-19 infection in the UK.
The code below creates two plots. They plot the same data, except that plot_log plots the y-axis (number of new Covid cases) in logarithmic scale. Both plots show a trend line (i.e. a linear regression of the y-axis on the x-axis).
plot =
filtered_data |>
ggplot(aes(date, new_cases_smoothed)) +
geom_point() +
geom_smooth(method = 'lm') +
labs(
x = 'Date',
y = 'New cases'
) +
theme(aspect.ratio = 1)
plot_log =
plot +
scale_y_log10()
Q5. Copy the code from the three code chunks above into a code chunk. a) Call both plots and show them in your compiled file. In which of them (linear scale or log scale) does the trend line fit the data better? Explain your answer by b) identifying the type of growth displayed in these plots, c) describing what this type of growth looks like in logarithmic scale.
We can in fact use the trend line to estimate the rate of growth of Covid during those days. The slope of the line in plot_log is about \(0.19 = \log(1.2)\). This means we can write \(N_{t+1}=1.2 \, N_t\), i.e., during March of 2020, the number of individuals infected with Covid in the UK increased by 20% every day. More generally, this means that every \(d\) days, the number of infected people increased by a factor of \(1.2^d\).
Q6. Based on the information above, how long did it take for one person infected with Covid to transmit the disease to another person?
Q7. Modify the final date in your code to a later date (say, late April 2020), and rerun the plots. a) Is exponential growth still a good description of the spread of the epidemic? b) Clearly, the rate at which Covid was spreading in the UK changed after March. What real-world events could have caused this change?
While exponential growth is a good approximation to the growth of biological populations under the right conditions, those conditions never last long. As populations grow, they deplete the resources they need to sustain growth. At some point, growing populations run out of room or food. In addition, large populations may become particularly vulnerable to predators, parasites, and pathogens. All these biological processes reduce the net growth of the population as it continues to increase, until a point is reached where it can no longer grow.
Q8. What would likely happen to the aphid population before Week 14 as the number of insects per plant continued to rise?
Recall that under exponential growth, every week (or other arbitrary unit of time) the population grows according to the rule:
\[N_{t+1} = \lambda N_t\]
where the constant factor \(\lambda = 1 + \textrm{fecundity} - \textrm{mortality}\) nets the initial population size plus the new births minus the new deaths. The explosiveness of exponential growth, and the reason it is unrealistic and unsustainable, comes from assuming that \(\lambda\) is constant no matter how large \(N\) gets. In biological terms, it is unrealistic to assume that the fecundity (number of offspring per adult) and mortality (proportion of individuals who die) are independent of the population size.
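To see how fast that rule compounds, here is a minimal simulation (with the illustrative value \(\lambda = 2\), i.e. the population doubles every step, as with the lily pads):

```r
lambda = 2   # illustrative: each individual doubles per time step
N = 1
for (t in 1:10) N = lambda * N
N   # after 10 steps: 2^10 = 1024
```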
Note: Greek letters like \(\lambda\) (lambda) are commonly used in scientific models. If you are unfamiliar with Greek letter names and symbols, refer to the table at the end of this lab write-up.
In nature, fecundity and mortality depend on the amount of resource available to the population, among other things. Suppose a population grows on a resource whose initial concentration is \(F\) (F for food). If each individual in a population consumes an amount \(c\) of resource, then a population with \(N\) individuals has eaten an amount \(cN\) of the resource, such that the remaining resource is \(F - cN\). Assuming the growth rate (i.e. the balance between fecundity and mortality) is proportional to the remaining amount of resource, we have \(\textrm{fecundity - mortality} = g\,(F-cN)\), where the growth constant \(g\) quantifies how fast consumed resource is converted into new consumer biomass. Substituting this in the general expression for population growth, we get
\[ N_{t+1} = (1+ g\,(F-cN_t)) N_t \] Since now the net difference between births and deaths decreases as the population increases, there will come a point where more will beget less, unlike exponential growth where more always begets more.
Q9. Assuming the population is initially small and starts growing as it consumes the resources, what happens if it reaches \(F/c\) individuals? How much available resource is left at that point? What happens if the population surpasses \(F/c\) individuals?
It is convenient to write the expression above in a different form:
\[ N_{t+1}-N_t=r\left(1-\frac{N_t}{K}\right)N_t \] which we arrive at by defining new constants \(r = gF\) and \(K = \frac{F}{c}\). This type of population growth is called logistic growth. The reason it is convenient to write it this way is that the two constants are readily identified as the population size where growth stops – the so-called carrying capacity of the population – and the growth rate when the population is very small – the so-called intrinsic growth rate of the population. \(r\) is thus the maximum growth rate the population is capable of achieving, which occurs when it is not yet limited by resource scarcity.
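Iterating the logistic map for a few dozen time steps (with illustrative parameters r = 0.5 and K = 100, not fit to any data) shows growth that starts out exponential-like and then levels off at the carrying capacity:

```r
r = 0.5; K = 100            # illustrative intrinsic growth rate and carrying capacity
N = numeric(50); N[1] = 1   # start with a single individual
for (t in 1:49) N[t + 1] = N[t] + r * (1 - N[t] / K) * N[t]
round(N[50])                # the population settles at K = 100
```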
The following table lists the quantities in our model and their meanings:
| Quantity | Meaning | Unit |
|---|---|---|
| \(N_t\) | number of aphids at week \(t\) | no unit |
| \(N_{t+1}\) | number of aphids at week \(t+1\) | no unit |
| \(r\) | intrinsic growth rate of the aphid population: the maximum rate of growth the population is capable of | \([\textrm{time}]^{-1}\) |
| \(K\) | carrying capacity of the aphid population | no unit |
Q10. If the current population size is much smaller than the carrying capacity, i.e. if \(\frac{N_t}{K}\ll 1\), what simplification can we make to the expression for logistic growth? What do you conclude from this?
Note: Strictly speaking, the expression above is called the logistic map. Logistic growth is properly defined as the solution to the logistic equation, \(\frac{dN}{dt}=r\left(1-\frac{N}{K}\right)N\), which bears obvious resemblance to the logistic map, \(N_{t+1}-N_t=r\left(1-\frac{N_t}{K}\right)N_t\). The difference is that in the logistic equation, time is continuous rather than a discrete variable measured in fixed intervals. Throughout the course I will not be making a strong distinction between continuous- and discrete-time models, but you should be aware that in general there are important differences in outcomes between similar-looking discrete- and continuous-time dynamical models.
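For the record, the logistic equation has the closed-form solution \(N(t) = \frac{K}{1 + \frac{K - N_0}{N_0}e^{-rt}}\), which we can verify numerically (a sketch with illustrative parameter values):

```r
r = 0.5; K = 100; N0 = 1   # illustrative parameters
N = function(t) K / (1 + (K - N0) / N0 * exp(-r * t))

# The derivative of N(t) (central finite difference) should match r * (1 - N/K) * N
t = 5; h = 1e-6
dNdt_numeric = (N(t + h) - N(t - h)) / (2 * h)
dNdt_model   = r * (1 - N(t) / K) * N(t)
c(dNdt_numeric, dNdt_model)   # the two agree
```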
Figure: Pierre Francois Verhulst (1804-1849), who introduced logistic growth to population ecology as the simplest form of self-limited growth.
Logistic growth is classically demonstrated in the context of microbial growth. As such, there’s no better place to look for real-world examples of it. Let’s load a dataset reporting results of an experiment on bacterial growth on plates treated with an antibiotic, run by Claudia Seiler at the Dresden University of Technology.
Figure: Pseudomonas putida, a common soil bacterium, which can be cultured in Petri dishes. Photos by Science Photo Library and Science.
dat =
bactgrowth |>
as_tibble()
The data frame contains the following columns:
dat
## # A tibble: 2,232 × 5
## strain replicate conc time value
## <fct> <int> <dbl> <int> <dbl>
## 1 T 2 0 0 0.013
## 2 T 2 0 1 0.014
## 3 T 2 0 2 0.017
## 4 T 2 0 3 0.022
## 5 T 2 0 4 0.03
## 6 T 2 0 5 0.039
## 7 T 2 0 6 0.042
## 8 T 2 0 7 0.045
## 9 T 2 0 8 0.048
## 10 T 2 0 9 0.049
## # … with 2,222 more rows
The experiment was designed to answer questions about growth of different bacterial strains under varying concentrations of the antibiotic tetracycline. However, for our purposes it suffices to pick out one treatment from the experiment. As such, let’s filter the data set to a particular strain at a particular antibiotic concentration in one of the experimental replicates.
dat_example =
dat |>
filter(strain == 'T', conc == .24, replicate == 1)
Here’s what the growth looks like during the first 6 hours:
dat_example_first6hours =
dat_example |>
filter(time <= 6)
dat_example_first6hours |>
ggplot(aes(time, value)) +
geom_point()
This looks like exponential growth! Indeed, we can fit an exponential growth model to it by using the function fit_growthmodel from the growthrates package, and then superimpose the predicted growth on the observations for a visual check of the goodness-of-fit.
fit_exponential =
with(
dat_example_first6hours,
fit_growthmodel(FUN = grow_exponential, p = c(y0 = .01, mumax = .2), time, value)
)
predicted_data =
predict(fit_exponential, dat_example_first6hours) |>
as_tibble()
dat_example_first6hours |>
ggplot(aes(time, value)) +
geom_line(aes(time, y), data = predicted_data, color = 'red') +
geom_point()
Not a bad fit!
However, as we let the bacteria continue to grow on the plate past the first 6 hours, the plot starts looking qualitatively different. Here’s the growth chart for the first 12 hours:
dat_example_first12hours =
dat_example |>
filter(time <= 12)
dat_example_first12hours |>
ggplot(aes(time, value)) +
geom_point()
Clearly, the growth slows down, and we get an S-shaped growth curve.
After 30 hours, we have:
dat_example |>
ggplot(aes(time, value)) +
geom_point()
We can fit a logistic model to this, and superimpose both our exponential and logistic fits to the full time series:
fit_logistic =
with(
dat_example,
fit_growthmodel(FUN = grow_logistic, p = c(y0 = .01, mumax = .2, K = .06), time, value)
)
dat_example |>
ggplot(aes(time, value)) +
geom_line(aes(time, y), as_tibble(predict(fit_logistic, dat_example)), color = 'red') +
geom_line(aes(time, y), as_tibble(predict(fit_exponential, dat_example_first12hours)), color = 'blue') +
geom_point()
Between the two models, logistic growth is clearly the better description. While the exponential model fits the initial stage of bacterial growth quite well, it vastly overpredicts what ultimately happens.
Q11. How do you explain what happened biologically?
The plot below shows the entire set of treatments from replicate 1 of the experiment. Although the curves differ in the details of their shapes – as they should since the point of the experiment is to look at different types of bacteria growing under different circumstances – most of them share a common feature: growth is exponential-like at first but eventually slows down.
dat |>
filter(replicate == 1) |>
ggplot(aes(time, value, group = strain, color = strain)) +
geom_point() +
facet_wrap(~conc, scales = 'free')
Q12. What would ultimately happen if the experiment persisted beyond 30 hours?
Q13. Show the plots for replicate 2 of the experiment.
The world population has experienced continuous growth since the 1300s, and accelerated growth since the 1800s due to events like the Agricultural and Industrial Revolutions. If we focus on the last 70 years or so of human population growth, is it still at a stage resembling exponential growth, or is it showing signs of slowing down? If the latter, can we predict the world’s eventual human population?
World population since 1800, chart by Bdm25
Let us try to answer these questions using data from the United Nations, available here. You can load the data directly from my GitHub page by running the code below.
github_address = 'https://github.com/rafaeldandrea/BIO-356/blob/master/WPP2019_POP_F01_1_TOTAL_POPULATION_BOTH_SEXES.xlsx?raw=true'
UN_popdata_1950_to_2020 =
rio::import(github_address, range = 'F17:BZ18') |>
select(-'Parent code') |>
pivot_longer(-Type, names_to = 'year', values_to = 'population') |>
select(-Type) |>
mutate(
year = as.numeric(year),
population = population / 1e6
)
First, let’s look at our data:
UN_popdata_1950_to_2020
## # A tibble: 71 × 2
## year population
## <dbl> <dbl>
## 1 1950 2.54
## 2 1951 2.58
## 3 1952 2.63
## 4 1953 2.68
## 5 1954 2.72
## 6 1955 2.77
## 7 1956 2.82
## 8 1957 2.87
## 9 1958 2.93
## 10 1959 2.98
## # … with 61 more rows
Notice that the population is given in billions.
We can plot this data directly using ggplot:
UN_popdata_1950_to_2020 |>
ggplot(aes(year, population)) +
geom_point() +
ylab('population (billions)') +
theme(aspect.ratio = 1)
Clearly, the world population is rising. But is it rising at a constant rate, as in exponential growth? Is the growth rate becoming larger every year, possibly due to fertility hikes accompanying improvements in nutrition and sanitation? Or is it slowing down, as in logistic growth?
To answer that question, we want to plot the growth rate against time. Let’s then create a new column for our tibble:
UN_popdata_1950_to_2020 %<>%
mutate(
population_nextyear = dplyr::lead(population),
growth_rate = 100 * (population_nextyear - population) / population
)
In words, we are calculating the annual growth rate for year \(t\) as \(\frac{N_{t+1}-N_t}{N_t}\), multiplied by 100 to write it as a percentage. Plotting the results:
plot_growth_vs_pop =
UN_popdata_1950_to_2020 |>
filter(year < 2020) |>
ggplot(aes(population, growth_rate, color = year)) +
geom_point() +
labs(
x = 'population (billions)',
y = 'annual growth rate (%)'
) +
theme(legend.position = 'none', aspect.ratio = 1) +
scale_color_gradient(low = 'blue', high = 'red')
plot_growth_vs_pop
Notice that here we are plotting the annual growth rate against \(N_t\), not time itself, although time flows towards the bottom-right, as indicated by the color gradient across the data points (1950 = blue, 2020 = red).
This is real-world data, and therefore it is not reasonable to expect a neat, simple, shape like a straight line. However, it should be clear that the global growth rate has not been constant over the past 70 years. Rather, it has been in decline, especially over the last 30 years. This rules out exponential growth. Humanity is definitely still growing, but it is putting its foot on the brake.
Assuming the growth rate continues to decline at a similar pace, it will eventually reach zero. Can we predict how big the world population will be when that happens?
To project the future based on the past we need a model. Upon examination of the plot above, it doesn’t seem outrageous to fit a line to this plot:
\[ \textrm{annual growth rate} \equiv \frac{N_{t+1}-N_t}{N_t} = a + b N_t \]
where \(a\) is the intercept and \(b\) is the slope.
linear_model = lm(growth_rate ~ population, data = UN_popdata_1950_to_2020)
plot_growth_vs_pop +
geom_abline(
intercept = coef(linear_model)[1],
slope = coef(linear_model)[2],
color = 'darkgrey'
)
With our line, we can now predict the population value at zero growth by finding the value where the line crosses the x-axis. In other words, we can get our answer by solving the expression \(0 = a + bN_t\) for the population \(N_t\).
Q14. Calculate the population at zero growth by solving the equation above. You can retrieve the intercept and slope of the line using the command coef(linear_model).
Q15. The statistical model \(\frac{N_{t+1}-N_t}{N_t} = a + b N_t\) that we are fitting to our UN data corresponds to one of the models of population growth we have discussed. Name that model, and by rearranging terms, identify how \(a\) and \(b\) relate to quantities associated with that model.
The UN actually makes their own projections for world population growth out to 2100, using several different forecasting models that make different assumptions about average fertility rates, mortality, and international migration. You can learn about them here. Below is the forecast from one of those models, called the “medium variant” model.
Notice the S-shape of the population curve. Also notice how the annual growth rate is approaching zero by 2100. From either plot you can assess how your estimate of the world population at zero growth compares against the UN’s projection. Was it close?
By now you are probably persuaded that self-limitation to population growth is a very important force in population ecology. Another field where understanding self-limited growth is critical is agriculture.
Assuming fixed prices, a farmer’s income will be proportional to their crop’s yield—i.e. the total harvestable weight per planted area. Consider a maize farm. Suppose each corn stalk weighs \(w\) kilograms, and the farmer plants \(\rho\) corn stalks per acre. The yield is then \(Y=w\rho\) kilograms per acre. A farmer usually can’t change their farm’s acreage, but they can increase the planting density \(\rho\) by planting more corn stalks per acre. It seems like a no-brainer that to maximize \(Y\) the farmer should make \(\rho\) as large as physically possible. Unfortunately, the weight per plant \(w\) typically decreases when we increase the planting density \(\rho\), because crowded plants become nutrient-limited. Therefore, in order to maximize the yield, we need to work out the exact relationship between yield and planting density.
Given its economic importance, the so-called yield-density relationship has been extensively studied, and here we are going to specifically examine how carbon- and nitrogen- limitation in plants may contribute to this relationship. We will follow the work of Thornley 1983.
Suppose an individual plant has a maximum harvestable weight \(w_0\). This is the weight the plant would achieve if grown with unlimited access to resources, and is therefore limited only by the plant’s genetics. In the field, however, the total available resource must be shared between all plants. We can then write a model of the actualized weight of an individual as
\[ w=\frac{w_0}{1 + \textrm{C limitation} + \textrm{N limitation} + \textrm{N}\cdot\textrm{C limitation}} \] where the terms in the denominator quantify the amount of carbon limitation, nitrogen limitation, and the potential interaction between carbon and nitrogen limitation. We now must specify how these nutrient limitations relate to the crop density, i.e. the number of plants per unit area in the farm.
We will assume that nutrient limitation is inversely proportional to the amount of nutrient available to the plant. For example, if we double the amount of nitrogen available to a plant, its nitrogen limitation is cut in half.
Let’s start with carbon. Plants absorb carbon in their leaves via photosynthesis. Total photosynthetic production per plant is proportional to the amount of available light, which in turn is proportional to the area \(a\) occupied by the plant in the field. (This assumes no light incidence from the sides of the plant, which is a good approximation in crops where plants are arranged in dense rows.) In other words, \(\textrm{C per plant} \propto a\), from which we conclude that \(\textrm{C limitation} \propto \frac{1}{a}\).
Q16. Suppose a farm of area \(A\) has \(n\) corn stalks. By definition, the density \(\rho\) is the number of plants divided by the area. Show that the area available to each plant is \(1/\rho\).
From Q16 we conclude that carbon limitation is proportional to \(\rho\), and we write \(\textrm{C limitation} = k_C \, \rho\), where the proportionality constant \(k_C\) quantifies the importance of \(C\) limitation to the final weight of the plant.
Now we can tackle nitrogen limitation. Plants absorb nitrogen from the soil via their roots. In many crops the roots grow as much sideways as downwards (see Fig. below), such that the root system of an individual fits inside a box as deep as it is wide. If the area on the ground occupied by a corn stalk is \(a\), the box is \(\sqrt{a}\) deep, so the root box underneath it has volume \(a \cdot \sqrt{a} = a^{3/2}\). We conclude \(\textrm{N per plant} \propto a^{3/2}\), and therefore \(\textrm{N limitation} \propto \left(\frac{1}{a}\right)^{3/2}\). From here we can follow the same logic as above and conclude that \(\textrm{N limitation} = k_N \; \rho^{3/2}\), where the constant \(k_N\) quantifies the importance of nitrogen limitation to the final weight of the plant.
Lastly, by the same token, the combined limitation of both nutrients is proportional to the product of the two individual limitations, \(\rho \cdot \rho^{3/2} = \rho^{5/2}\), so we write \(\textrm{N}\cdot\textrm{C limitation} = k_{NC} \; \rho^{5/2}\).
Maize root system, from Leitner et al 2014, Plant Physiology 164(1):24-35
Having worked this all out, we can write the plant weight in terms of the crop density:
\[ w = \frac{w_0}{1 + k_C\; \rho + k_N\;\rho^{3/2}+k_{NC}\;\rho^{5/2}} \]
With that, we can write the yield \(Y=w \, \rho\) as a function of constants and the crop density \(\rho\):
\[ Y = \frac{w_0\,\rho}{1 + k_C\; \rho + k_N\;\rho^{3/2}+k_{NC}\;\rho^{5/2}} \]
This gives us a quantitative prediction of the yield-density relationship. Notice that if the constants \(k_C\), \(k_N\), and \(k_{NC}\) are all zero, we get \(w = w_0\) and \(Y = w_0\rho\), i.e. plants grow to their maximum potential and the yield scales in proportion to the crop density (e.g. doubling the density will double the yield).
The following table lists the quantities in our model and their meanings:
| Quantity | Meaning | Unit |
|---|---|---|
| \(a\) | ground area available to each plant | \(m^2\) |
| \(\rho\) | crop density, i.e. number of plants per unit area, equal to \(1/a\) | \(m^{-2}\) |
| \(w_0\) | maximum weight of an individual plant, determined by genetics | \(kg\) |
| \(w\) | actual weight of an individual plant given nutrient limitation | \(kg\) |
| \(Y\) | yield, defined as total plant weight, per unit area of the field, equal to \(w \rho\) | \(kg/m^2\) |
| \(k_C\), \(k_N\), \(k_{NC}\) | importance of the respective nutrient limitation | - |
The R code below defines a yield function according to our model, then calculates and plots the yield as a function of the crop density in three scenarios, showing the effect of each of the nutrient limitation terms in isolation.
Yield = function(rho, kc, kn, knc){
  # Y = w0 * rho / (1 + kC*rho + kN*rho^(3/2) + kNC*rho^(5/2)), with w0 = 100
  100 * rho / (1 + kc * rho + kn * rho ^ 1.5 + knc * rho ^ 2.5)
}
dat =
tibble(
density = seq(0, 10, l = 1000),
`N-limited` = Yield(rho = density, kc = 0, kn = 1, knc = 0),
`C-limited` = Yield(rho = density, kc = 1, kn = 0, knc = 0),
`NC-limited` = Yield(rho = density, kc = 0, kn = 0, knc = 1)
) |>
pivot_longer(-density, names_to = 'scenario', values_to = 'yield')
dat |>
ggplot(aes(density, yield, group = scenario, color = scenario)) +
geom_line(linewidth = 1) +
labs(
x = 'density (plants per unit area)',
y = 'yield (harvestable plant mass per unit area)'
)
As you can see, nitrogen and carbon limitation have qualitatively different impacts on yield.
Q17. Based on the plot above, if the farmer has access to unlimited amounts of fertilizer, what would you advise regarding planting density?
Q18. Based on the plot above, in a more realistic scenario where fertilizer is in limited supply, what should the farmer do?
Based on our expression for the yield as a function of the crop density, we conclude after some calculus (setting \(dY/d\rho = 0\); the \(k_C\) term cancels out) that the yield is maximized when the crop density \(\rho\) satisfies the equation \(k_N\rho^{3/2}+3k_{NC}\rho^{5/2}-2=0\). The function optimal_density defined below numerically calculates this optimal value given a set of \(k_C\), \(k_N\), \(k_{NC}\):
optimal_density = function(kn, kc, knc){
  # root of kN*rho^(3/2) + 3*kNC*rho^(5/2) - 2 = 0; note that kc does not
  # appear, since the optimal density does not depend on carbon limitation
  uniroot(f = function(rho) kn * rho ^ 1.5 + 3 * knc * rho ^ 2.5 - 2, interval = c(0, 50))$root
}
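As a sanity check, with made-up values \(k_N = 1\), \(k_C = 0.5\), \(k_{NC} = 0.1\) (not the Q19 values), the root returned by optimal_density should coincide with the density that maximizes the Yield function directly:

```r
# Yield and optimal_density as defined earlier in this section
Yield = function(rho, kc, kn, knc){
  100 * rho / (1 + kc * rho + kn * rho ^ 1.5 + knc * rho ^ 2.5)
}
optimal_density = function(kn, kc, knc){
  uniroot(f = function(rho) kn * rho ^ 1.5 + 3 * knc * rho ^ 2.5 - 2, interval = c(0, 50))$root
}

rho_root = optimal_density(kn = 1, kc = 0.5, knc = 0.1)
rho_opt  = optimize(Yield, interval = c(0, 50), maximum = TRUE,
                    kc = 0.5, kn = 1, knc = 0.1)$maximum
c(rho_root, rho_opt)   # the two approaches agree
```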
Q19. Suppose the importance of carbon limitation, nitrogen limitation, and interactive carbon and nitrogen limitation are respectively \(k_C=0.1\), \(k_N=0.01\), and \(k_{NC}=0.005\). Use the function optimal_density to calculate the optimal number of corn plants per square meter on this farm. Assign this value to an object called optimal_rho. Then use the code below to plot the yield as a function of density, and confirm that the yield peaks at the \(\rho\) value you obtained.
test =
tibble(
density = seq(0, 30, l = 1000),
yield = Yield(rho = density, kc = .1, kn = .01, knc = .005)
)
test |>
ggplot(aes(density, yield)) +
geom_line() +
geom_vline(aes(xintercept = optimal_rho), color = 'red') +
theme(aspect.ratio = 1)
Q20: In the model above, nutrient limitation creates a negative relationship between plant density and plant size. This provides a mechanistic explanation for which phenomenon named in lecture?
A herring fishery in Canada. Credit: Ian MacAllister/Pacific Wild
Consider a fish population undergoing logistic growth,
\[ \frac{dN}{dt} = r \left(1 - \frac{N}{K} \right) N \] In the context of fisheries, the population size \(N\) is called the stock. If we do nothing, we can expect the stock to equilibrate to its carrying capacity \(K\). But now suppose we regularly harvest a portion of the population for commercial purposes. The fishing effectively acts as additional mortality to the fish population, and we can thus rewrite our expression for the growth of this population like this:
\[ \frac{dN}{dt} = r \left(1 - \frac{N}{K} \right) N - qEN \]
where the added term quantifies the number of fish removed from the population—i.e., the catch. The parameter \(E\) here represents our fishing effort (for example, the number of fishing boats or the number of fishing nets per boat). The parameter \(q\) is a proportionality constant: it tells us how many fish you catch per net you cast per fish in the stock. While we can’t control \(q\), we can control \(E\), and we want to know what is the optimal fishing effort—that is, the fishing effort that leads to the maximal possible catch. How would you go about doing this?
Note that the catch is proportional to both the fishing effort and the stock. Therefore, in order to calculate the catch, we must know the stock. That will be the equilibrium population size given this added mortality, \(N^*(E)\). So our first job is to calculate that equilibrium stock.
Q21: Calculate the equilibrium stock given fishing effort \(E\). Show your work. Hint: solve the dynamic equation above for equilibrium, i.e. equate the right-hand side to zero and solve for \(N\).
Next, to calculate the catch as a function of \(E\) we simply plug the answer to Q21 into the expression for the catch. Doing so, we arrive at
\[ \textrm{catch} = qE\left(1 - \frac{qE}{r}\right)K \] Notice that this is a quadratic function of the fishing effort \(E\). Plotting the catch against \(E\) gives
where I set \(q = 0.01\) (i.e. one caught fish per net per 100 fish in the stock), \(K = 1\) million fish, and \(r = 1\).
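The curve above can be reproduced in a few lines. This sketch uses base R graphics rather than ggplot for brevity, with the parameter values just stated (\(q = 0.01\), \(K = 10^6\), \(r = 1\)):

```r
## Catch as a function of fishing effort, using the parameters stated above
q <- 0.01; K <- 1e6; r <- 1
catch <- function(E) q * E * (1 - q * E / r) * K

E <- seq(0, 100, by = 0.5)
plot(E, catch(E), type = "l",
     xlab = "fishing effort E (number of nets)", ylab = "catch")
```

Reading the location and height of the peak off this curve answers the first part of Q22.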
Q22: Based on the plot above, what is the optimal number of fishing nets for this fishery? How many fish are caught under that fishing effort? Why does the catch decline beyond that fishing effort? Shouldn’t more nets always yield more caught fish?
Now let’s add a bit more realism to this picture. We will consider that the fish population may have a minimum viable size, below which the population would collapse even in the absence of harvesting. In other words, we will suppose that this population undergoes an Allee effect.
We can implement an Allee effect by modifying the growth equation as follows:
\[ \frac{dN}{dt} = r \left(\frac{N}{A}-1\right)\left(1 - \frac{N}{K} \right) N - qEN \]
where \(A\) is the minimum viable size for this population—the Allee threshold.
Now let’s ask the same question: what is the optimal fishing effort in this fishery? We proceed the same way as before, first calculating the equilibrium stock given the fishing effort, then plugging the result into the expression for the catch. Plotting the result gives the following:
Something very interesting happens: as we increase the fishing effort, the catch initially increases, then peaks, then abruptly crashes down to zero! Whereas in our previous model we saw a gradual decline in the catch past the optimal fishing effort, here the catch falls off a cliff. This is very alarming!
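Here is a sketch of how this curve can be computed. At a nonzero equilibrium, \(r(N/A - 1)(1 - N/K) = qE\), which rearranges to the quadratic \(N^2 - (A+K)N + AK(1 + qE/r) = 0\); the larger root is the stable equilibrium stock, and once the discriminant turns negative no positive equilibrium remains and the stock collapses to zero. The Allee threshold \(A = 200{,}000\) below is an assumed value for illustration; \(q\), \(K\), and \(r\) are as before:

```r
## Assumed parameters: q, K, r as in the previous model; A is illustrative
q <- 0.01; K <- 1e6; r <- 1; A <- 2e5

## Stable equilibrium stock: the larger root of
## N^2 - (A + K) N + A K (1 + q E / r) = 0, or 0 (collapse) if no real root
stock <- function(E) {
  disc <- (A + K)^2 - 4 * A * K * (1 + q * E / r)
  ifelse(disc < 0, 0, ((A + K) + sqrt(disc)) / 2)
}

catch <- function(E) q * E * stock(E)

E <- seq(0, 100, by = 0.5)
plot(E, catch(E), type = "l",
     xlab = "fishing effort E (number of nets)", ylab = "catch")
```

With these values the catch rises until the discriminant hits zero, then drops to zero in a single step rather than declining gradually.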
Q23: Using your understanding of the ecological concepts we learned this week, explain why the catch drops so precipitously past the optimal fishing effort. Hint: Allee effects.
Q24: Explain why this huge sensitivity of the catch beyond the optimal fishing effort is alarming from a sustainability standpoint. Hint: Describe what would happen if the fishery tried to arrive at the optimal fishing effort by trial-and-error, increasing the number of nets gradually until the catch no longer increases.
The phenomenon above, whereby a small change in a parameter leads to a dramatic change in the outcome, is called a bifurcation. The parameter value at which the bifurcation occurs is called the bifurcation point (in ecological contexts, often a tipping point). Bifurcations are very common in nonlinear dynamic systems. Another phenomenon common to nonlinear systems is chaos, in which outcomes are extremely sensitive to initial conditions. We won’t delve any further into bifurcations, and we won’t cover chaos at all in this course. Unfortunately for ecologists and conservationists, ecological systems are very much nonlinear, and the presence of bifurcations and chaos severely undermines the predictability of those systems.
| Name | Symbol (upper case, lower case) |
|---|---|
| alpha | \(A, \alpha\) |
| beta | \(B, \beta\) |
| gamma | \(\Gamma, \gamma\) |
| delta | \(\Delta, \delta\) |
| epsilon | \(E, \epsilon\) |
| zeta | \(Z, \zeta\) |
| eta | \(H, \eta\) |
| theta | \(\Theta, \theta\) |
| iota | \(I, \iota\) |
| kappa | \(K, \kappa\) |
| lambda | \(\Lambda, \lambda\) |
| mu | \(M, \mu\) |
| nu | \(N, \nu\) |
| xi | \(\Xi, \xi\) |
| omicron | \(O, \omicron\) |
| pi | \(\Pi, \pi\) |
| rho | \(P, \rho\) |
| sigma | \(\Sigma, \sigma\) |
| tau | \(T, \tau\) |
| upsilon | \(\Upsilon, \upsilon\) |
| phi | \(\Phi, \phi\) |
| chi | \(X, \chi\) |
| psi | \(\Psi, \psi\) |
| omega | \(\Omega, \omega\) |